
    An approach to filtering RFID data streams

    RFID is gaining significant traction as the preferred choice for automatic identification and data collection systems. However, various data processing and management problems, such as missed readings and duplicate readings, hinder wide-scale adoption of RFID systems. To this end, we propose an approach that filters the captured data, covering both noise removal and duplicate elimination. Experimental results demonstrate that the proposed approach improves the missed-data restoration process when compared with the existing method.
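
    The abstract does not spell out the filtering algorithm itself. As a rough illustration only, the sketch below (plain Python, with an assumed (epoch, tag_id) stream format and a hypothetical window parameter) shows the two steps the abstract mentions: dropping duplicate reads within a read cycle and restoring readings for a tag that disappears for only a few cycles.

        def filter_stream(readings, window=3):
            """readings: list of (epoch, tag_id); one epoch = one read cycle (assumed format)."""
            last_seen = {}          # tag_id -> last epoch in which the tag was observed
            cleaned = []
            for epoch, tag in sorted(readings):
                if last_seen.get(tag) == epoch:
                    continue        # duplicate read within the same cycle: drop it
                prev = last_seen.get(tag)
                if prev is not None and 0 < epoch - prev <= window:
                    # the tag vanished for only a few cycles: treat the gap as missed reads
                    cleaned.extend((e, tag) for e in range(prev + 1, epoch))
                cleaned.append((epoch, tag))
                last_seen[tag] = epoch
            return cleaned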

    Enhancing RFID data quality and reliability

    This thesis addressed the problem of data quality, reliability, and energy consumption in networked Radio Frequency Identification systems used for decision-making processes in business intelligence applications. The outcome of the research substantially improved the accuracy and reliability of RFID-generated data and reduced energy depletion, thus prolonging the RFID system's lifetime.

    RFID data reliability optimiser based on two dimensions bloom filter

    Radio frequency identification (RFID) is a flexibly deployable technology that has been adopted in many applications, especially in supply chain management. An RFID system uses radio waves to interact wirelessly with tagged objects in order to detect and read their data. However, RFID data streams contain many false positive and duplicate readings. Both types of readings need to be removed to ensure the reliability of the information produced from the data streams. In this paper, a single approach based on the Bloom filter is proposed to remove both kinds of dirty data from RFID data streams. The noise and duplicate data filtering algorithm is constructed from two Bloom filters: one removes noise data, and the other distinguishes correct readings from duplicate readings. Experimental results show that the proposed approach outperforms other existing approaches in terms of data reliability.
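
    The abstract does not detail the paper's two-filter construction, so the following is only a minimal sketch of the duplicate-elimination half: a basic Bloom filter used to drop tag IDs that have (probably) been seen before. The bit-array size and number of hash functions are assumptions, not values from the paper.

        import hashlib

        class BloomFilter:
            """Basic Bloom filter: k hash positions per item over a fixed bit array."""
            def __init__(self, size=8192, num_hashes=4):
                self.size, self.num_hashes = size, num_hashes
                self.bits = [False] * size

            def _positions(self, item):
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                    yield int(digest, 16) % self.size

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p] = True

            def might_contain(self, item):
                return all(self.bits[p] for p in self._positions(item))

        def drop_duplicates(tag_ids):
            seen, unique = BloomFilter(), []
            for tag in tag_ids:
                if not seen.might_contain(tag):   # may rarely skip a new tag (false positive)
                    unique.append(tag)
                    seen.add(tag)
            return unique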

    Printing conductive ink tracks on textile materials

    Textile materials with integrated electrical features make it possible to create intelligent articles with a wide range of applications in sports, workwear, health care, safety, and other fields. Traditionally, conductive textiles are created using conductive fibers, treated conductive fibers, conductive woven fabrics, and conductive ink. Technologies for printing conductive ink on textile materials are still under development, so this study investigates the feasibility of printing conductive ink by manual application, silk-screen printing, and a modified off-the-shelf inkjet printer. The two-point probe resistance test (IV resistance test) is employed to measure the resistance of all substrates. The surface finish and thickness of the conductive ink tracks were measured using an optical microscope. The functionality of the printed electronic structures was tested by introducing strain via a bending test to determine how their resistance changes when bent. The resistance obtained from the manual method and from single-layer conductive ink tracks produced by the silk-screen process was as expected, and the double-layer conductive ink tracks produced by silk-screen printing also showed satisfactory resistance results. A micro-structural analysis showed that the surface finish of the single-layer conductive ink tracks was not as good as that of the double-layer tracks. Furthermore, the bending tests gave the expected result that increasing the bend angle decreases the conductivity. The RS 186-3600 silver conductive paint provided low resistance, below 40 ohm, after being printed on the fabric material.

    Information extraction from semi and unstructured data sources: a systematic literature review

    Millions of structured, semi-structured, and unstructured documents are produced around the globe on a daily basis. Sources of such documents include individuals as well as research societies such as IEEE, Elsevier, Springer, and Wiley, which publish scientific documents in enormous numbers. These documents are a huge resource of scientific knowledge for research communities and interested users around the world. However, due to their massive volume and varying formats, search engines face problems in indexing such documents, making information retrieval inefficient, tedious, and time consuming. Information extraction from such documents is among the hottest areas of research in data/text mining. As the number of such documents increases tremendously, more sophisticated information extraction techniques are necessary. This research focuses on reviewing and summarizing existing state-of-the-art techniques in information extraction in order to highlight their limitations. Consequently, the research gap is formulated for researchers in the information extraction domain.

    Web transcript verification using check digit as secure number

    Cases of fraud involving academic qualification documents are increasing due to advances in editing technology, which make document forgery easy. Verifying academic documents, including degree certificates and academic transcripts, has become a worldwide problem. It is a frightening situation when people without the right qualifications work as professionals in areas where they can harm society. Manual verification can be time-consuming and impractical. Therefore, it is paramount to have a web-based solution that operates around the clock to perform this task. In this paper, a dual check digit approach is proposed to verify the authenticity of an academic transcript. Three check digit methods, namely Universal Product Code Mod 10 (UPC Mod 10), International Standard Book Number Mod 11 (ISBN Mod 11), and Luhn Mod 10, were applied to test their reliability in carrying out this task. The simulation results show that the combination of ISBN Mod 11 and UPC Mod 10 performed very well in validating the authenticity of the transcripts.
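
    The abstract names Luhn Mod 10 among the tested schemes. As a minimal illustration of how such a check digit protects a numeric string (this is not the paper's dual-check-digit scheme, only the standard Luhn computation), the sketch below computes and verifies a Luhn check digit.

        def luhn_check_digit(number: str) -> int:
            """Compute the Luhn Mod 10 check digit for a string of digits."""
            total = 0
            # walk right-to-left over the payload; double every second digit
            for i, ch in enumerate(reversed(number)):
                d = int(ch)
                if i % 2 == 0:      # these positions are doubled once the check digit is appended
                    d *= 2
                    if d > 9:
                        d -= 9
                total += d
            return (10 - total % 10) % 10

        def luhn_valid(number_with_check: str) -> bool:
            return luhn_check_digit(number_with_check[:-1]) == int(number_with_check[-1])

        # e.g. luhn_check_digit("7992739871") == 3, and luhn_valid("79927398713") is True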

    Fitting statistical distribution of extreme rainfall data for the purpose of simulation

    In this study, several types of probability distributions were used to fit daily torrential rainfall data from 15 monitoring stations in Peninsular Malaysia for the period 1975 to 2007. Fitting statistical distributions is important for finding the most suitable model to anticipate extreme events of natural phenomena such as floods and tsunamis. The aim of the study is to determine which distribution fits the daily torrential Malaysian rainfall data well. The Generalized Pareto, Lognormal, and Gamma distributions were tested against the daily torrential rainfall amounts in Peninsular Malaysia. First, the appropriate distribution of the daily torrential rainfall was identified among the selected distributions for each rainfall station. Then, data sets were generated from the fitted probability distributions to mimic the daily torrential rainfall data. Graphical representations and goodness-of-fit tests were used to find the best-fitting model. The Generalized Pareto distribution was found to be the most appropriate for describing the daily torrential rainfall amounts of Peninsular Malaysia. The outputs can be beneficial for generating several sets of simulated data matrices that mimic the characteristics of the rainfall data, in order to assess the performance of the modification method compared with the classical method.
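
    The abstract does not name the software or the exact goodness-of-fit procedure. As an illustrative sketch of the workflow it describes (fit the three candidate distributions, compare goodness of fit, then simulate from the winner), the following uses scipy.stats with a hypothetical input file of daily torrential rainfall amounts; the file name and the number of simulated data sets are assumptions.

        import numpy as np
        from scipy import stats

        rainfall = np.loadtxt("station_rainfall.txt")     # hypothetical 1-D array of daily amounts (mm)

        candidates = {
            "genpareto": stats.genpareto,
            "lognorm": stats.lognorm,
            "gamma": stats.gamma,
        }

        fits = {}
        for name, dist in candidates.items():
            params = dist.fit(rainfall)                        # maximum-likelihood fit
            ks = stats.kstest(rainfall, name, args=params)     # Kolmogorov-Smirnov GoF test
            fits[name] = (params, ks.statistic, ks.pvalue)

        best = min(fits, key=lambda n: fits[n][1])             # smallest KS statistic
        params = fits[best][0]
        # simulated data matrix mimicking the fitted rainfall distribution
        simulated = candidates[best].rvs(*params, size=(100, rainfall.size))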

    A deep contractive autoencoder for solving multiclass classification problems

    The contractive autoencoder (CAE) is one of the most robust variants of the standard autoencoder (AE). The major drawback of the conventional CAE is its high reconstruction error during the encoding and decoding of input features. This drawback prevents the CAE from capturing the finer details present in the input features, so information worth considering is missed. As a result, the features extracted by the CAE do not truly represent all the input features, and the classifier fails to solve classification problems efficiently. In this work, an improved variant of the CAE is proposed, based on a layered, feed-forward architecture, named the deep CAE. In the proposed architecture, ordinary CAEs are arranged in layers, and encoding and decoding take place inside each layer. The features obtained from the previous CAE are given as inputs to the next CAE. Each CAE in every layer is responsible for reducing the reconstruction error, thus yielding informative features. The feature set obtained from the last CAE is given as input to a softmax classifier for classification. The performance and efficiency of the proposed model were tested on five MNIST variant datasets. The results were compared with the standard SAE, DAE, RBM, SCAE, ScatNet, and PCANet in terms of training error, testing error, and execution time. The results revealed that the proposed model outperforms the aforementioned models.
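
    The abstract does not give the loss function or layer sizes, so the following is only a minimal sketch of a single contractive autoencoder layer of the kind that could be stacked layer-wise as described (the hidden features of one layer become the input of the next). A sigmoid encoder is assumed so the Jacobian penalty has a closed form; the dimensions and the penalty weight lam are assumptions.

        import torch
        import torch.nn as nn

        class ContractiveAE(nn.Module):
            """One contractive autoencoder layer (illustrative sizes, not from the paper)."""
            def __init__(self, in_dim=784, hid_dim=256):
                super().__init__()
                self.encoder = nn.Linear(in_dim, hid_dim)
                self.decoder = nn.Linear(hid_dim, in_dim)

            def forward(self, x):
                h = torch.sigmoid(self.encoder(x))       # hidden features, fed to the next CAE
                x_hat = torch.sigmoid(self.decoder(h))   # reconstruction of the input
                return h, x_hat

            def loss(self, x, h, x_hat, lam=1e-4):
                recon = ((x_hat - x) ** 2).sum(dim=1).mean()
                # Contractive penalty: squared Frobenius norm of dh/dx for a sigmoid encoder,
                # ||J||_F^2 = sum_i (h_i (1 - h_i))^2 * sum_j W_ij^2
                dh2 = (h * (1 - h)) ** 2                        # (batch, hid)
                w2 = (self.encoder.weight ** 2).sum(dim=1)      # (hid,)
                contractive = (dh2 * w2).sum(dim=1).mean()
                return recon + lam * contractive

        # Layer-wise usage sketch: train the first CAE on the raw inputs, feed its hidden
        # features h into the next CAE, and pass the last layer's features to a softmax classifier.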

    Spatial clustering-based gas station location determination

    The absence of gas stations in Cibeber Subdistrict is at odds with the high level of transportation use that supports residents' mobility. The purpose of this research is to cluster data using K-Means clustering and spatial modeling to identify potential locations for the construction of gas stations in Cibeber District. Based on the analysis carried out in RStudio, four villages are potential sites for the construction of gas stations, namely Cikotok, Cibeber, Neglasari, and Wanasari. The spatial modeling results show that Cibeber District has a total of 862 potential location points, of which 233 lie within the four potential villages. After processing with the weighted product method for optimization, three best locations were obtained: Tegalumbu Village located in Wanasari Village, Nagrak Village located in Cikotok Village, and Cinangga Village located in Cibeber Village.
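
    The study itself was carried out in RStudio and its criteria are not listed in the abstract; purely as an illustration of the two steps it names (K-Means clustering of candidate points, then weighted-product ranking), the Python sketch below uses hypothetical coordinates, criteria, and weights.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical candidate points (longitude, latitude) and per-point criteria values
        points = np.random.rand(862, 2)             # stand-in for the 862 potential location points
        criteria = np.random.rand(862, 3) + 0.1     # e.g. road access, population density, competitor distance

        # Step 1: K-Means clustering of candidate locations (k = 4 chosen for illustration)
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(points)
        labels = km.labels_

        # Step 2: weighted product method (WPM) to rank candidates;
        # positive weight = benefit criterion, negative weight = cost criterion
        weights = np.array([0.5, 0.3, -0.2])
        scores = np.prod(criteria ** weights, axis=1)
        best = np.argsort(scores)[::-1][:3]          # indices of the top 3 candidate locations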

    A review on missing tags detection approaches in RFID system

    A Radio Frequency Identification (RFID) system can automatically detect a very large number of tagged objects within a short time. With this advantage, it has been used in many areas, especially supply chain management, manufacturing, and many others. It has the ability to track an individual object all the way from the manufacturing factory until it reaches the retail store. However, because detection depends on radio signals, readings of tagged objects can be missed due to signal loss. Signal loss can be caused by weak signals, interference, or unknown sources. Missing tag detection in RFID systems is a truly significant problem because misleading information generated from inaccurate readings renders system reporting useless. Missed detections can also trigger false theft alarms or leave objects undetected and unattended for some period. This paper reviews this issue and compares some of the proposed approaches, including Window Sub-range Transition Detection (WSTD), the Efficient Missing-Tag Detection Protocol (EMD), and the Multi-hashing based Missing Tag Identification (MMTI) protocol. Based on the review, it gives insight into the current challenges and opens the way for new solutions to the missing tag detection problem.
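
    The reviewed protocols (WSTD, EMD, MMTI) are only named in the abstract, so no attempt is made to sketch them here. As a naive baseline only, the snippet below shows the basic missing-tag check those protocols improve upon: comparing the expected inventory set against the tags actually read in a round; the read-count threshold is an illustrative parameter.

        def find_missing_tags(expected_tags, observed_reads, min_reads=1):
            """Naive baseline: flag a tag as missing if it was read fewer than
            `min_reads` times during an inventory round."""
            counts = {tag: 0 for tag in expected_tags}
            for tag in observed_reads:
                if tag in counts:
                    counts[tag] += 1
            return {tag for tag, n in counts.items() if n < min_reads}

        # e.g. find_missing_tags({"T1", "T2", "T3"}, ["T1", "T3", "T3"]) == {"T2"}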